The existence of metallic implants in projection images for cone-beam computed tomography (CBCT) introduces undesired artifacts which degrade the quality of reconstructed images. In order to reduce metal artifacts, projection inpainting is an essential step in many metal artifact reduction algorithms. In this work, a hybrid network combining the shifted window (Swin) vision transformer (ViT) and a convolutional neural network is proposed as a baseline network for the inpainting task. To incorporate metal information into the Swin ViT-based encoder, metal-conscious self-embedding and neighborhood-embedding methods are investigated. Both methods improve the performance of the baseline network. Furthermore, by choosing an appropriate window size, the model with neighborhood embedding achieves the lowest mean absolute error of 0.079 in metal regions and the highest peak signal-to-noise ratio of 42.346 in CBCT projections. Finally, the efficacy of metal-conscious embedding is demonstrated on both simulated and real cadaver CBCT data, where it enhances the inpainting capability of the baseline network.
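As a rough illustration of how metal information could be injected into a Swin-style patch embedding, the minimal PyTorch sketch below embeds a binary metal-trace mask in parallel and adds it to the projection tokens; the layer names, shapes, and the "self-embedding"-style variant shown are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MetalConsciousPatchEmbed(nn.Module):
    def __init__(self, patch_size=4, in_chans=1, embed_dim=96):
        super().__init__()
        # Standard Swin-style patch embedding for the projection image.
        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
        # Parallel embedding of the binary metal mask (illustrative "self-embedding" variant).
        self.metal_proj = nn.Conv2d(1, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x, metal_mask):
        # x: (B, 1, H, W) projection; metal_mask: (B, 1, H, W) in {0, 1}.
        tokens = self.proj(x) + self.metal_proj(metal_mask)  # inject metal information per patch
        tokens = tokens.flatten(2).transpose(1, 2)            # (B, N, C) token sequence
        return self.norm(tokens)

# Example usage with a 256x256 projection and a synthetic metal trace.
proj = torch.rand(1, 1, 256, 256)
mask = (torch.rand(1, 1, 256, 256) > 0.95).float()
print(MetalConsciousPatchEmbed()(proj, mask).shape)  # torch.Size([1, 4096, 96])
```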
Visual Entity Linking (VEL) is the task of linking regions of images with their corresponding entities in Knowledge Bases (KBs), which benefits many computer vision tasks such as image retrieval, image captioning, and visual question answering. However, existing VEL tasks either rely on textual data to complement multi-modal linking or only link objects with general entities, and therefore fail to perform named entity linking on large amounts of image data. In this paper, we consider a purely Visual-based Named Entity Linking (VNEL) task, where the input consists only of an image. The task is to identify objects of interest (i.e., visual entity mentions) in images and link them to corresponding named entities in KBs. Since each entity often contains rich visual and textual information in KBs, we propose three different sub-tasks, i.e., visual to visual entity linking (V2VEL), visual to textual entity linking (V2TEL), and visual to visual-textual entity linking (V2VTEL). In addition, we present a high-quality human-annotated visual person linking dataset, named WIKIPerson. Based on WIKIPerson, we establish a series of baseline algorithms for each sub-task and conduct experiments to verify the quality of the proposed dataset and the effectiveness of the baseline methods. We envision this work will help solicit more research on VNEL in the future. The codes and datasets are publicly available at https://github.com/ict-bigdatalab/VNEL.
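For intuition on the V2TEL sub-task, the toy sketch below links a visual mention embedding to the KB entity with the most similar textual embedding; the embeddings and identifiers are random placeholders, and a real system would obtain them from a vision-language encoder. This is an illustrative assumption, not the paper's baseline.

```python
import numpy as np

def link_mention(mention_emb, entity_embs, entity_ids):
    # Cosine similarity between the mention and every candidate entity.
    sims = entity_embs @ mention_emb / (
        np.linalg.norm(entity_embs, axis=1) * np.linalg.norm(mention_emb) + 1e-8)
    return entity_ids[int(np.argmax(sims))], float(np.max(sims))

rng = np.random.default_rng(0)
mention = rng.normal(size=512)            # embedding of the cropped person region
entities = rng.normal(size=(1000, 512))   # embeddings of KB entity descriptions
ids = [f"Q{i}" for i in range(1000)]      # hypothetical KB identifiers
print(link_mention(mention, entities, ids))
```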
Knowledge-intensive language tasks (KILT) usually require a large body of information to provide correct answers. A popular paradigm for this problem is to combine a search system with a machine reader, where the former retrieves supporting evidence and the latter examines it to produce answers. Recently, the reader component has witnessed significant progress with the help of large-scale pre-trained generative models. Meanwhile, most existing solutions in the search component rely on the traditional "index-retrieve-then-rank" pipeline, which suffers from a large memory footprint and difficulty in end-to-end optimization. Inspired by recent efforts to build model-based IR models, we propose to replace the traditional multi-step search pipeline with a novel single-step generative model, which can dramatically simplify the search process and be optimized in an end-to-end manner. We show that a strong generative retrieval model can be learned with a set of adequately designed pre-training tasks and adopted to improve a variety of downstream KILT tasks with further fine-tuning. We name the pre-trained generative retrieval model CorpusBrain, as all information about the corpus is encoded in its parameters without the need to construct an additional index. Empirical results show that CorpusBrain can significantly outperform strong baselines on the retrieval tasks of the KILT benchmark and establishes new state-of-the-art performance. We also show that CorpusBrain works well under zero- and low-resource settings.
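For intuition on what single-step generative retrieval means in practice, the toy sketch below decodes a document identifier token by token while constraining decoding to identifiers that actually exist in the corpus. The scoring function is a stand-in for a trained seq2seq model such as CorpusBrain; the corpus, query, and scorer are illustrative assumptions.

```python
def build_prefix_index(identifiers):
    # Map every prefix of every identifier to its set of valid next tokens.
    index = {}
    for ident in identifiers:
        tokens = ident.split()
        for i in range(len(tokens)):
            index.setdefault(tuple(tokens[:i]), set()).add(tokens[i])
    return index

def generate_identifier(score_next_token, prefix_index, max_len=8):
    prefix = []
    for _ in range(max_len):
        allowed = prefix_index.get(tuple(prefix), set())
        if not allowed:
            break
        # Greedy choice among tokens that keep the output a valid corpus identifier.
        prefix.append(max(allowed, key=lambda tok: score_next_token(prefix, tok)))
    return " ".join(prefix)

corpus_ids = ["Albert Einstein", "Albert Camus", "Marie Curie"]
index = build_prefix_index(corpus_ids)
# Toy scorer that prefers tokens appearing in the (hypothetical) query.
query = "who developed relativity albert einstein"
scorer = lambda prefix, tok: query.count(tok.lower())
print(generate_identifier(scorer, index))  # -> "Albert Einstein"
```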
Fiducial markers are often used in navigation-assisted minimally invasive spine surgery (MISS), where they help transfer image coordinates into real-world coordinates. In practice, these markers may lie outside the field of view (FOV) due to the limited detector size of the C-arm cone-beam computed tomography (CBCT) systems used intra-operatively. Consequently, the reconstructed markers in CBCT volumes suffer from artifacts and have distorted shapes, which sets an obstacle for navigation. In this work, we propose two fiducial marker detection methods: direct detection from distorted markers (direct method) and detection after marker recovery (recovery method). To directly detect the distorted markers in the reconstructed volume, an efficient automatic marker detection method using two neural networks and a conventional circle detection algorithm is proposed. For marker recovery, a task-specific learning strategy is proposed to recover markers from severely truncated data; afterwards, a conventional marker detection algorithm is applied for position detection. The two methods are evaluated on simulated and real data, and both achieve a marker registration error smaller than 0.2 mm. Our experiments demonstrate that the direct method is able to detect the distorted markers accurately, and that the recovery method with task-specific learning is highly robust and generalizable across various data sets. In addition, task-specific learning is able to accurately reconstruct other structures of interest, e.g. ribs for image-guided needle biopsy, from severely truncated data, giving the CBCT system a new potential application.
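The conventional circle-detection step for marker position detection could look like the minimal sketch below, run on a synthetic slice containing one bright marker; the OpenCV parameters are illustrative and would need tuning for real reconstructions.

```python
import cv2
import numpy as np

# Synthetic 8-bit slice with a single bright, roughly circular marker.
slice_img = np.zeros((256, 256), dtype=np.uint8)
cv2.circle(slice_img, center=(90, 140), radius=6, color=255, thickness=-1)
slice_img = cv2.GaussianBlur(slice_img, (5, 5), 0)

# Classical Hough-based circle detection on the (pre-filtered) slice.
circles = cv2.HoughCircles(
    slice_img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
    param1=50, param2=10, minRadius=3, maxRadius=12)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"marker candidate at ({x}, {y}), radius {r}")
```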
This paper studies the problem of designing compact binary architectures for vision multi-layer perceptrons (MLPs). We provide extensive analysis of the difficulty of binarizing vision MLPs and find that previous binarization methods perform poorly due to the limited capacity of binary MLPs. In contrast with traditional CNNs that utilize convolutional operations with large kernel sizes, fully-connected (FC) layers in MLPs can be treated as convolutional layers with kernel size $1\times1$. Thus, the representation ability of the FC layers is limited when they are binarized, which places restrictions on the capability of spatial mixing and channel mixing on the intermediate features. To this end, we propose to improve the performance of the binary MLP (BiMLP) model by enriching the representation ability of binary FC layers. We design a novel binary block that contains multiple branches to merge a series of outputs from the same stage, as well as a universal shortcut connection that encourages the information flow from the previous stage. The downsampling layers are also carefully designed to reduce the computational complexity while maintaining the classification performance. Experimental results on the benchmark dataset ImageNet-1k demonstrate the effectiveness of the proposed BiMLP models, which achieve state-of-the-art accuracy compared to prior binary CNNs. The MindSpore code is available at \url{https://gitee.com/mindspore/models/tree/master/research/cv/BiMLP}.
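A minimal PyTorch sketch in the spirit of this design is given below, assuming a sign binarizer with a straight-through estimator and an illustrative multi-branch binary FC block with a full-precision shortcut; it is not the exact BiMLP block.

```python
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()  # pass gradients only inside [-1, 1]

class BinaryLinear(nn.Linear):
    def forward(self, x):
        # Binarize both activations and weights before the FC operation.
        return nn.functional.linear(BinarizeSTE.apply(x), BinarizeSTE.apply(self.weight), self.bias)

class MultiBranchBinaryBlock(nn.Module):
    def __init__(self, dim, branches=3):
        super().__init__()
        self.branches = nn.ModuleList(BinaryLinear(dim, dim) for _ in range(branches))

    def forward(self, x):
        # Merge several binary FC outputs and keep a full-precision shortcut.
        return x + sum(branch(x) for branch in self.branches) / len(self.branches)

tokens = torch.randn(2, 196, 384)  # (batch, tokens, channels)
print(MultiBranchBinaryBlock(384)(tokens).shape)
```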
Background: Deep learning-based automatic segmentation of head-and-neck lymph node levels (HN_LNL) is highly relevant for radiotherapy research and clinical treatment planning, but remains underinvestigated in the academic literature. Methods: An expert-delineated cohort of 35 planning CTs was used to train an nnU-Net 3D-fullres/2D-ensemble model for automatic segmentation of 20 different HN_LNL. Validation was performed on an independent test set (n = 20). In a completely blinded evaluation, 3 clinical experts rated the quality of the deep-learning autosegmentations in head-to-head comparison with expert-created contours. For a subgroup of 10 cases, intraobserver variability was compared with the deep-learning autosegmentation performance. The effect of autocontour consistency with the CT slice plane orientation on geometric accuracy and expert rating was investigated. Results: The mean blinded expert rating of deep-learning segmentations adjusted to the CT slice plane was significantly better than that of expert-created contours (81.0 vs. 79.6, p < 0.001), whereas deep-learning segmentations without slice-plane adjustment were rated significantly worse than expert-created contours (77.2 vs. 79.6, p < 0.001). The geometric accuracy of the deep-learning segmentations did not differ from intraobserver variability (mean Dice, 0.78 vs. 0.77, p = 0.064), with accuracy varying between levels (p < 0.001). The clinical significance of consistency with the CT slice plane orientation was not captured by geometric accuracy metrics (Dice, 0.78 vs. 0.78, p = 0.572). Conclusions: We show that an nnU-Net 3D-fullres/2D-ensemble model can be used for highly accurate autosegmentation of HN_LNL using only a limited training data set, which is ideally suited for large-scale standardized autosegmentation of HN_LNL in the research setting. Geometric accuracy metrics are only an imperfect surrogate for blinded expert rating.
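As background on the geometric accuracy metric reported above, a small sketch of the Dice coefficient between a binary autocontour and an expert contour is shown below; the volumes are synthetic.

```python
import numpy as np

def dice(a, b, eps=1e-8):
    # Dice = 2|A ∩ B| / (|A| + |B|) for binary masks.
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

auto = np.zeros((64, 64, 64), dtype=bool); auto[20:40, 20:40, 20:40] = True
expert = np.zeros_like(auto); expert[22:42, 20:40, 20:40] = True
print(round(dice(auto, expert), 3))  # two slightly shifted cubes overlap with Dice 0.9
```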
Low-dose computed tomography (CT) denoising algorithms aim to enable reduced patient dose in routine CT acquisitions while maintaining high image quality. Recently, deep learning (DL)-based methods were introduced, outperforming conventional denoising algorithms on this task due to their high model capacity. However, for the transition of DL-based denoising to clinical practice, these data-driven approaches must generalize robustly beyond the seen training data. We therefore propose a hybrid denoising approach consisting of a set of trainable joint bilateral filters (JBFs) in combination with a convolutional DL-based denoising network that predicts the guidance image. Our proposed denoising pipeline combines the high model capacity enabled by DL-based feature extraction with the reliability of the conventional JBF. The pipeline's ability to generalize is demonstrated by training on abdomen CT scans without metal implants and testing on abdomen scans with metal implants as well as on head CT data. When embedding two DL-based denoisers (RED-CNN / QAE) in our pipeline, the denoising performance is improved by 10% / 82% (RMSE) and 3% / 81% (PSNR) in regions containing metal, and by 6% / 78% (RMSE) and 2% / 4% (PSNR) on head CT data, compared to the respective vanilla models. In conclusion, the proposed trainable JBFs limit the error bound of the deep neural networks to facilitate the applicability of DL-based denoisers in low-dose CT pipelines.
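A plain, non-trainable joint bilateral filter can be sketched as follows: the noisy image is smoothed with range weights computed on a guidance image, which in the described pipeline would be predicted by the DL denoiser. Here the guidance is simply the synthetic ground truth and the sigma values are illustrative.

```python
import numpy as np

def joint_bilateral_filter(noisy, guidance, radius=3, sigma_spatial=2.0, sigma_range=0.05):
    noisy_p = np.pad(noisy, radius, mode="reflect")
    guide_p = np.pad(guidance, radius, mode="reflect")
    out = np.zeros_like(noisy)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial_w = np.exp(-(xs**2 + ys**2) / (2 * sigma_spatial**2))  # fixed spatial kernel
    H, W = noisy.shape
    for i in range(H):
        for j in range(W):
            n_win = noisy_p[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            g_win = guide_p[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weights come from the guidance image, not from the noisy input.
            range_w = np.exp(-(g_win - guidance[i, j])**2 / (2 * sigma_range**2))
            w = spatial_w * range_w
            out[i, j] = (w * n_win).sum() / w.sum()
    return out

clean = np.tile(np.linspace(0, 1, 64), (64, 1))       # synthetic "anatomy"
noisy = clean + 0.05 * np.random.default_rng(0).normal(size=clean.shape)
guidance = clean                                      # stand-in for the DL-predicted guidance
print(abs(joint_bilateral_filter(noisy, guidance) - clean).mean() < abs(noisy - clean).mean())
```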
Continually learning to segment more and more types of image regions is a desired capability for many intelligent systems. However, such continual semantic segmentation suffers from the same catastrophic forgetting issue as continual classification learning. While multiple knowledge distillation strategies originally designed for continual classification have been well adapted to continual semantic segmentation, they only consider transferring old knowledge based on the outputs from one or more layers of deep fully convolutional networks. Different from existing solutions, this study proposes to transfer a new type of knowledge-relevant information, i.e., the relationships between elements (e.g., pixels or small local regions) within each image, which can capture both within-class and between-class knowledge. This relationship information can be effectively obtained from the self-attention maps in a Transformer-style segmentation model. Considering that pixels belonging to the same class in each image often share similar visual properties, a class-specific region pooling is applied to provide more efficient relationship information for knowledge transfer. Extensive evaluations on multiple public benchmarks show that the proposed self-attention transfer method can further effectively alleviate the catastrophic forgetting issue, and that its flexible combination with one or more widely adopted strategies significantly outperforms state-of-the-art solutions.
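One plausible instantiation of attention-map distillation with class-specific region pooling is sketched below in PyTorch; the pooling and loss are assumptions for illustration under assumed tensor shapes, not the exact published method.

```python
import torch

def class_pooled_attention(attn, labels, num_classes):
    # attn: (B, N, N) token-to-token attention; labels: (B, N) class id per token.
    pooled = []
    for c in range(num_classes):
        mask = (labels == c).float().unsqueeze(-1)                    # (B, N, 1)
        denom = mask.sum(dim=1, keepdim=True).clamp(min=1.0)
        pooled.append((attn * mask).sum(dim=1) / denom.squeeze(-1))   # mean relation of class c to all tokens
    return torch.stack(pooled, dim=1)                                 # (B, num_classes, N)

def attention_transfer_loss(attn_new, attn_old, labels, num_classes):
    # Pull the current model's pooled attention toward the frozen old model's.
    p_new = class_pooled_attention(attn_new, labels, num_classes)
    p_old = class_pooled_attention(attn_old, labels, num_classes)
    return torch.mean((p_new - p_old) ** 2)

attn_old = torch.softmax(torch.randn(2, 196, 196), dim=-1)  # frozen previous-step model
attn_new = torch.softmax(torch.randn(2, 196, 196), dim=-1)  # current model
labels = torch.randint(0, 4, (2, 196))                      # per-token region class labels
print(attention_transfer_loss(attn_new, attn_old, labels, num_classes=4))
```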
Brain metastases occur frequently in patients with metastatic cancer. Early and accurate detection of brain metastases is essential for treatment planning and prognosis in radiation therapy. To improve deep learning-based brain metastasis detection performance, a custom detection loss called volume-level sensitivity-specificity (VSS) is proposed, which rates individual metastasis detection sensitivity and specificity at the (sub-)volume level. As sensitivity and precision are always a trade-off at the metastasis level, either high sensitivity or high precision can be achieved by adjusting the weights in the VSS loss, without decline in the Dice score coefficient of the segmented metastases. To reduce metastasis-like structures being detected as false-positive metastases, a temporal prior volume is proposed as an additional input to the neural network. Our proposed VSS loss improves the sensitivity of brain metastasis detection, raising the sensitivity from 86.7% to 95.5%; alternatively, it improves the precision from 68.8% to 97.8%. With the additional temporal prior volume, about 45% of false-positive metastases are removed in the high-sensitivity model, and the precision reaches 99.6% for the high-specificity model. The mean Dice coefficient for all metastases is about 0.81. With the ensemble of the high-sensitivity and high-specificity models, on average only 1.5 false-positive metastases per patient need further review, while most true-positive metastases are confirmed. The ensemble learning is able to distinguish high-confidence true-positive metastases from metastasis candidates that require special expert review or further follow-up, which is particularly well suited to the requirements of expert support in real clinical practice.
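A simplified surrogate of a volume-level sensitivity-specificity loss is sketched below, where a weight beta trades off sensitivity against specificity; this is an illustrative soft formulation, not the exact loss from the paper.

```python
import torch

def sensitivity_specificity_loss(pred, target, beta=0.7, eps=1e-6):
    # pred: predicted metastasis probabilities in (0, 1); target: binary ground truth.
    tp = (pred * target).sum()
    fn = ((1 - pred) * target).sum()
    tn = ((1 - pred) * (1 - target)).sum()
    fp = (pred * (1 - target)).sum()
    sensitivity = tp / (tp + fn + eps)
    specificity = tn / (tn + fp + eps)
    return 1 - (beta * sensitivity + (1 - beta) * specificity)

pred = torch.rand(1, 64, 64, 64)
target = (torch.rand(1, 64, 64, 64) > 0.995).float()  # sparse synthetic metastasis voxels
print(sensitivity_specificity_loss(pred, target))     # larger beta favours high sensitivity
```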
We study the problem of learning from positive and unlabeled (PU) data in the federated setting, where each client only labels a small part of its dataset due to limited resources and time. Different from the setting of traditional PU learning, in which the negative class consists of a single class, the negative samples that cannot be identified by a client in the federated setting may come from multiple classes unknown to that client. Therefore, existing PU learning methods can hardly be applied in this situation. To address this problem, we propose a novel framework, namely Federated learning with Positive and Unlabeled data (FedPU), to minimize the expected risk of the multiple negative classes by leveraging the labeled data of other clients. We theoretically analyze the generalization bound of the proposed FedPU. Empirical experiments show that FedPU achieves much better performance than conventional supervised and semi-supervised federated learning methods.
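As background, the sketch below estimates a standard non-negative PU risk for a single client in the binary case; FedPU generalizes this idea to multiple negative classes across clients. The class prior and the classifier scores here are synthetic assumptions.

```python
import torch

def nn_pu_risk(scores_pos, scores_unl, prior, loss=torch.nn.functional.softplus):
    # Positive risk on labelled positives, plus a non-negative estimate of the
    # negative risk obtained from the unlabelled data.
    risk_pos = prior * loss(-scores_pos).mean()
    risk_neg = loss(scores_unl).mean() - prior * loss(scores_pos).mean()
    return risk_pos + torch.clamp(risk_neg, min=0.0)

scores_pos = torch.randn(32) + 1.0   # classifier scores on the client's labelled positives
scores_unl = torch.randn(256)        # scores on its unlabelled samples
print(nn_pu_risk(scores_pos, scores_unl, prior=0.3))
```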